Heterogeneous concurrent computing with exportable services
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
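The service-based, data-driven model described above can be illustrated with a minimal sketch (all names are hypothetical, not from TPVM itself): a lightweight thread stands in for an exported service and fires only when its input operands arrive, rather than occupying a full process for the lifetime of the computation.

```python
import queue
import threading

# Hypothetical sketch of a data-driven "exported service": a lightweight
# thread that runs its function each time a full set of operands arrives.
class ExportedService:
    def __init__(self, name, func, arity):
        self.name = name
        self.func = func
        self.arity = arity              # number of operands that trigger a run
        self._inbox = queue.Queue()
        self.results = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def send(self, operand):
        # Arrival of data drives the computation (data-driven model).
        self._inbox.put(operand)

    def _loop(self):
        while True:
            args = [self._inbox.get() for _ in range(self.arity)]
            self.results.put(self.func(*args))

svc = ExportedService("add", lambda a, b: a + b, arity=2)
svc.send(3)
svc.send(4)
print(svc.results.get())   # 7
```

The design point is that many such services can share one address space, which is what makes the thread-based infrastructure lighter than the traditional one-process-per-task model.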
Security and Privacy Dimensions in Next Generation DDDAS/Infosymbiotic Systems: A Position Paper
The omnipresent pervasiveness of personal devices will expand the applicability of the Dynamic Data Driven Application Systems (DDDAS) paradigm in innumerable ways. While every single smartphone or wearable device is potentially a sensor with powerful computing and data capabilities, privacy and security in the context of human participants must be addressed to leverage the infinite possibilities of dynamic data driven application systems. We propose a security and privacy preserving framework for next generation systems that harness the full power of the DDDAS paradigm while (1) ensuring provable privacy guarantees for sensitive data; (2) enabling field-level, intermediate, and central hierarchical feedback-driven analysis for both data volume mitigation and security; and (3) intrinsically addressing uncertainty caused either by measurement error or security-driven data perturbation. These thrusts will form the foundation for secure and private deployments of large scale hybrid participant-sensor DDDAS systems of the future.
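One common way to obtain the kind of provable privacy guarantee mentioned above is the Laplace mechanism from differential privacy, which perturbs each released sensor value with calibrated noise. The sketch below is illustrative only, not taken from the paper; the parameter names and values are assumptions.

```python
import math
import random

def perturb(value, sensitivity, epsilon):
    """Release value with epsilon-differential privacy via Laplace noise."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return value + noise

# Smaller epsilon means stronger privacy and larger expected perturbation,
# which is exactly the "security-driven data perturbation" uncertainty that
# downstream DDDAS analysis must account for.
readings = [98.6, 99.1, 97.8]
private = [perturb(r, sensitivity=1.0, epsilon=0.5) for r in readings]
```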
The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms
The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of application science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems, and computing clouds.
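The directive-insertion idea can be sketched as follows. This is a hypothetical miniature, not the actual SCVM toolkit: the directive syntax (`#HWB`), the platform table, and the build script are all invented for illustration. The key property it demonstrates is the one stated above: commands are retargeted for the platform while the order of the build instruction flow is preserved.

```python
# Hypothetical platform table mapping target machines to Fortran compilers.
PLATFORM_COMPILERS = {"cray-xt5": "ftn", "generic-linux": "mpif90"}

def retarget_build(script_lines, platform):
    """Interpret toolkit directives in a build script for a target platform,
    rewriting compiler invocations while keeping every line in its place."""
    out = []
    for line in script_lines:
        if line.startswith("#HWB"):
            out.append(line)                      # directive: pass through
        elif line.startswith("$(FC)"):
            # Substitute the platform-specific compiler for the placeholder.
            out.append(line.replace("$(FC)", PLATFORM_COMPILERS[platform], 1))
        else:
            out.append(line)                      # untouched build command
    return out

script = ["#HWB compiler", "$(FC) -O2 -c solver.f90", "ar rcs libsolver.a solver.o"]
print(retarget_build(script, "cray-xt5")[1])   # ftn -O2 -c solver.f90
```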
Publishing H2O pluglets in UDDI registries
Interoperability and standards, such as Grid Services, are a focus of current Grid research. The intent is to facilitate resource virtualization, and to accommodate the intrinsic heterogeneity of resources in distributed environments. It is important that new and emerging metacomputing frameworks conform to these standards, in order to ensure interoperability with other grid solutions. In particular, the H2O metacomputing system offers several benefits, including lightweight operation, user-configurability, and selectable security levels. Its applicability would be enhanced even further through support for grid services and OGSA compliance. Code deployed into the H2O execution containers is referred to as pluglets. These pluglets constitute the end points of services in H2O, services that are to be made known through publication in a registry. In this contribution, we discuss a system pluglet, referred to as OGSAPluglet, that scans H2O execution containers for available services and publishes them into one or more UDDI registries. We also discuss in detail the algorithms that manage the publication of the appropriate WSDL and GSDL documents for the registration process.
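The scan-and-publish loop described above can be sketched in a few lines. This is a simplified illustration under assumed data shapes (containers as dicts, registries as plain dict stores); the real OGSAPluglet works against actual H2O kernels and UDDI APIs and also generates the WSDL/GSDL documents, which are elided here.

```python
def publish_services(containers, registries):
    """Scan containers for pluglets and register each service endpoint in
    every registry, skipping endpoints that are already registered."""
    entries = []
    for container in containers:
        for pluglet in container["pluglets"]:
            # Hypothetical endpoint naming scheme for illustration only.
            endpoint = f"h2o://{container['host']}/{pluglet}"
            for reg in registries:
                if endpoint not in reg:          # avoid duplicate registration
                    reg[endpoint] = {"wsdl": endpoint + "?wsdl"}
                    entries.append(endpoint)
    return entries

kernel = {"host": "node1.example.org", "pluglets": ["OGSAPluglet", "Solver"]}
uddi = {}
publish_services([kernel], [uddi])
```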
Eliciting the End-to-End Behavior of SOA Applications in Clouds
Availability and performance are key issues in SOA cloud applications. Those applications can be represented as a graph spanning multiple cloud and on-premises environments, forming a very complex computing system that supports increasing numbers and types of users, business transactions, and usage scenarios. In order to rapidly find, predict, and proactively prevent root causes of issues, such as performance degradations and runtime errors, we developed a monitoring solution that is able to elicit the end-to-end behavior of those applications. We insert lightweight components into SOA frameworks and clients, thereby keeping the monitoring impact minimal. Monitoring data collected from call chains is used to assist in diagnosing issues related to performance, errors, and alerts, as well as business and IT transactions.
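The call-chain elicitation idea can be sketched as follows; the names here are hypothetical, not the authors' components. A lightweight interceptor tags each request with a correlation id and records every hop, so the end-to-end chain through the SOA graph can be reconstructed from the collected events.

```python
import uuid

EVENTS = []   # collected monitoring events: (correlation_id, service_name)

def traced(service_name, func):
    """Wrap a service call with a minimal monitoring interceptor."""
    def wrapper(payload, corr_id=None):
        corr_id = corr_id or str(uuid.uuid4())   # start a new call chain
        EVENTS.append((corr_id, service_name))   # record this hop
        return func(payload, corr_id)            # propagate the id downstream
    return wrapper

# Two mock services: "orders" calls "billing", forming a two-hop chain.
billing = traced("billing", lambda p, c: f"charged:{p}")
orders  = traced("orders",  lambda p, c: billing(p, c))

orders("order-42")
chain = [svc for _, svc in EVENTS]
print(chain)   # ['orders', 'billing']
```

Because the id is threaded through the calls rather than stored centrally, the per-request overhead is a single tuple append, which is what keeps the monitoring impact minimal.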
Cooperative Fault-Tolerant Distributed Computing -- U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report
The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a "pluggable" architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project significantly enhance the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.
Detection of Distributed Attacks in Hybrid & Public Cloud Networks
In this paper, the early detection of distributed attacks launched from multiple sites of hybrid and public cloud networks is discussed. A prototype Cloud Distributed Intrusion Detection System (CDIDS) is presented along with some basic experiments. A summation of security alerts is applied, which helps detect distributed attacks while keeping false positives to a minimum. Using the alert-summation mechanism, attacks with a slow iteration rate are detected at an early stage. The objective of our work is to propose a Security Management System (SMS) that can detect malicious activities, and the camouflaging of attacks, as early as possible, even under conditions when other security management systems become unstable due to intense attack events.
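The alert-summation mechanism can be sketched as follows; the threshold, severity scale, and addresses are invented for illustration, not taken from CDIDS. Individual low-severity alerts are accumulated per source, so a slow, distributed probe eventually crosses the detection threshold even though no single alert would.

```python
from collections import defaultdict

THRESHOLD = 10   # assumed detection threshold for the accumulated score

def detect(alerts, threshold=THRESHOLD):
    """Accumulate alert severities per source; flag sources whose running
    sum reaches the threshold, catching slow-rate distributed attacks."""
    scores = defaultdict(int)
    flagged = set()
    for source, severity in alerts:      # alerts stream in from all sites
        scores[source] += severity
        if scores[source] >= threshold:
            flagged.add(source)
    return flagged

# A slow attacker (three small probes, then one larger one) versus a
# one-off low-severity event from a benign source.
stream = [("10.0.0.5", 2)] * 3 + [("10.0.0.9", 1)] + [("10.0.0.5", 4)]
print(detect(stream))   # {'10.0.0.5'}
```

Summing rather than counting per-event lets the detector stay quiet for isolated benign alerts, which is how false positives are kept low.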
Enhancing Functionality and Performance in the PVM Network Computing System. Final project report
Harness: Heterogeneous Adaptable Reconfigurable Networked Systems -- U.S. Department of Energy Grant DE-FG02-99ER25379 Final Project Report
Issues in reconfigurability and adaptability in heterogeneous distributed systems for high-performance computing are the focus of the work funded by this grant. Our efforts are part of an ongoing research project in metacomputing and are a follow on to the DOE funded PVM system that has witnessed over a decade of use at numerous institutions worldwide. The current project, termed Harness, investigates novel methodologies and tools for distributed metacomputing, focusing on dynamically reconfigurable software frameworks. During the first phase, we defined the metacomputing architecture embodied in Harness and developed prototype subsystems as proof of concept exercises. Subsequently, we designed and developed a complete software framework manifesting the Harness architecture, and also developed several tools and subsystems that demonstrated the viability and effectiveness of our proposed model for next generation metacomputing. We then used this substrate to emulate multiple programming environments on Harness, and conducted performance evaluation and tuning exercises. The main research results from these efforts include the establishment of software metacomputing systems as viable and cost-effective alternatives to MPPs; the demonstration of dynamic and reconfigurable platforms as effective methods of tailoring parallel computing environments; the development of methodologies to construct plugin modules for component-based distributed systems; contributions to performance modeling and optimization in emulated software environments; and software architectures for multi- and mixed-paradigm parallel distributed computing. Details and specifics on these and other results have been reported in numerous publications, and are manifested in software systems, all of which may be accessed at or via the website http://www.mathcs.emory.edu/harness
Heterogeneous Network Computing: The Next Generation
In this paper, we discuss some selected aspects of heterogeneous computing in the context of the PVM system, and describe evolutionary enhancements to the system. These extensions, which involve performance optimization, lightweight processes, and client-server computing, suggest useful directions that the next generation of heterogeneous systems might follow. A prototype design of such a next-generation heterogeneous computing framework is also discussed. Parallel computing methodologies using clusters of heterogeneous systems have demonstrated their viability in the past several years, both for high-performance scientific computing and for more "general purpose" applications. This approach to concurrent computation is based on the premise that a collection of independent computer systems, interconnected by networks, can be transformed into a coherent, powerful, and cost-effective concurrent computing resource through the use of software frameworks. The most common methodology for realizing such a mode of computing is exemplified by PVM (Parallel Virtual Machine), a software framework that emulates a generalized distributed-memory multiprocessor in heterogeneous networked environments.